A well-known failure mode of neural networks is that they may confidently return incorrect predictions, especially for data that differs in some way from the training distribution. Such unsafe behaviour limits their applicability. To address this, we show that models offering accurate confidence estimates can be defined via constraints added to their internal representations. Namely, we encode class labels as fixed, unique binary vectors, or class codes, and use these to enforce class-dependent activation patterns throughout the model. The resulting predictor is dubbed the Total Activation Classifier (TAC), and TAC serves as an add-on to a base classifier, indicating how reliable a prediction is. Given a data instance, TAC slices intermediate representations into disjoint sets and reduces each slice to a scalar, yielding an activation profile. During training, activation profiles are pushed towards the code assigned to the given training instance. At test time, one can predict the class corresponding to the code that best matches the example's activation profile. Empirically, we observe that the similarity between activation patterns and their corresponding codes provides an inexpensive, unsupervised way to induce discriminative confidence scores. That is, we show that TAC is at least as good as state-of-the-art confidence scores extracted from existing models, while strictly improving the models' value in the rejection setting. TAC is also observed to work well across multiple types of architectures and data modalities.
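The slice-and-match step described above can be sketched in a few lines. A minimal numpy illustration, where the slicing granularity, the mean reduction, and the Euclidean matching are assumptions for concreteness rather than the paper's exact choices:

```python
import numpy as np

def activation_profile(hidden, num_slices):
    """Slice a hidden representation into disjoint groups and reduce each
    group to a scalar (here: the mean), yielding the activation profile."""
    slices = np.array_split(hidden, num_slices)
    return np.array([s.mean() for s in slices])

def predict_with_confidence(hidden, class_codes):
    """Match an example's activation profile against fixed binary class codes;
    the negative distance to the best code doubles as a confidence score
    usable in a rejection setting."""
    profile = activation_profile(hidden, class_codes.shape[1])
    dists = np.linalg.norm(class_codes - profile, axis=1)
    pred = int(np.argmin(dists))
    return pred, -dists[pred]
```

An instance whose profile sits far from every code would then receive a low confidence and could be rejected.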
translated by 谷歌翻译
The task of reconstructing 3D human motion has wide-ranging applications. The gold standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not present in the MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field. It is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
Multi-object state estimation is a fundamental problem for robotic applications where a robot must interact with other moving objects. Typically, other objects' relevant state features are not directly observable, and must instead be inferred from observations. Particle filtering can perform such inference given approximate transition and observation models. However, these models are often unknown a priori, yielding a difficult parameter estimation problem since observations jointly carry transition and observation noise. In this work, we consider learning maximum-likelihood parameters using particle methods. Recent methods addressing this problem typically differentiate through time in a particle filter, which requires workarounds to the non-differentiable resampling step that yield biased or high-variance gradient estimates. By contrast, we exploit Fisher's identity to obtain a particle-based approximation of the score function (the gradient of the log likelihood) that yields a low-variance estimate while only requiring stepwise differentiation through the transition and observation models. We apply our method to real data collected from autonomous vehicles (AVs) and show that it learns better models than existing techniques and is more stable in training, yielding an effective smoother for tracking the trajectories of vehicles around an AV.
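The Fisher-identity estimator above can be illustrated on a toy scalar state-space model. A minimal numpy sketch, not the paper's implementation: the linear-Gaussian model, the standard-normal prior, and the ancestor-tracing smoothing approximation are all assumptions, and ancestor tracing is known to degenerate for long series:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_score(y, theta, n_particles=500, obs_std=1.0, trans_std=1.0):
    """Estimate the score d/dtheta log p(y_{1:T} | theta) for the toy model
        x_t = theta * x_{t-1} + trans_noise,   y_t = x_t + obs_noise,
    via Fisher's identity: the score is the smoothed expectation of the sum of
    stepwise gradients d/dtheta log p(x_t | x_{t-1}, theta).  A bootstrap
    particle filter with ancestor tracing gives a particle approximation,
    differentiating only through the transition density."""
    N = n_particles
    x = rng.normal(0.0, 1.0, N)        # particles from an assumed N(0,1) prior
    grad = np.zeros(N)                 # running per-particle gradient sums
    for y_t in y:
        x_prev = x
        x = theta * x_prev + rng.normal(0.0, trans_std, N)      # propagate
        # stepwise gradient of log N(x_t; theta * x_prev, trans_std^2)
        grad = grad + (x - theta * x_prev) * x_prev / trans_std**2
        logw = -0.5 * ((y_t - x) / obs_std) ** 2                # bootstrap weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(N, N, p=w)                             # resample
        x, grad = x[idx], grad[idx]                             # trace ancestry
    return grad.mean()
```

The returned estimate can be fed to any stochastic-gradient optimizer to fit theta by (approximate) maximum likelihood.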
The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness. Unfortunately, intrinsic interpretability can display unwieldy explanation redundancy. Formal explainability represents the alternative to these non-rigorous approaches, with one example being PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. Recently, it has been observed that the (absolute) rigor of PI-explanations can be traded off for a smaller explanation size, by computing the so-called relevant sets. Given some positive δ, a set S of features is δ-relevant if, when the features in S are fixed, the probability of getting the target class exceeds δ. However, even for very simple classifiers, the complexity of computing relevant sets of features is prohibitive, with the decision problem being NP^PP-complete for circuit-based classifiers. In contrast with earlier negative results, this paper investigates practical approaches for computing relevant sets for a number of widely used classifiers that include Decision Trees (DTs), Naive Bayes Classifiers (NBCs), and several families of classifiers obtained from propositional languages. Moreover, the paper shows that, in practice, and for these families of classifiers, relevant sets are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained for the families of classifiers considered.
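The δ-relevance condition can be made concrete by brute-force enumeration over the free features, which is tractable only for toy classifiers. A sketch assuming binary features with uniform completions (the paper's classifiers and algorithms are far more general):

```python
from itertools import product

def is_delta_relevant(classifier, instance, S, delta):
    """Check whether fixing the features in S to their values in `instance`
    makes the probability of the target class exceed delta, assuming the
    remaining features are uniform over {0, 1}.  Brute force over all
    completions: exactly the quantity that is NP^PP-hard in general."""
    target = classifier(instance)
    free = [i for i in range(len(instance)) if i not in S]
    hits = 0
    for bits in product([0, 1], repeat=len(free)):
        point = list(instance)
        for i, b in zip(free, bits):
            point[i] = b
        hits += classifier(point) == target
    return hits / (2 ** len(free)) > delta
```

For a 3-bit majority vote and the instance (1, 1, 0), fixing the first two features makes the class certain, while fixing only the first leaves the target-class probability at 0.75.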
This study concerns the formulation and application of Bayesian optimal experimental design to symbolic discovery, which is the inference from observational data of predictive models taking general functional forms. We apply constrained first-order methods to optimize an appropriate selection criterion, using Hamiltonian Monte Carlo to sample from the prior. A step for computing the predictive distribution, which involves a convolution, is performed via either numerical integration or fast transform methods.
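The fast-transform route for the convolution step can be illustrated on a uniform grid. A minimal numpy sketch, assuming the predictive density arises as the convolution of a push-forward density with an independent noise density (the grids, densities, and discretisation are illustrative assumptions):

```python
import numpy as np

def fft_convolve_density(f, g, dx):
    """Convolve two densities sampled on a shared uniform grid with spacing
    dx, via FFT: the 'fast transform' alternative to direct numerical
    integration of the convolution integral."""
    n = len(f) + len(g) - 1
    out = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)
    return out * dx   # dx factor turns the discrete sum into a quadrature
```

The result agrees with direct discrete convolution (`np.convolve`) up to floating-point error, at O(n log n) rather than O(n^2) cost.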
Transformers are powerful visual learners, in large part due to their conspicuous lack of manually-specified priors. This flexibility can be problematic in tasks that involve multiple-view geometry, due to the near-infinite possible variations in 3D shapes and viewpoints (requiring flexibility), and the precise nature of projective geometry (obeying rigid laws). To resolve this conundrum, we propose a "light touch" approach, guiding visual Transformers to learn multiple-view geometry but allowing them to break free when needed. We achieve this by using epipolar lines to guide the Transformer's cross-attention maps, penalizing attention values outside the epipolar lines and encouraging higher attention along these lines since they contain geometrically plausible matches. Unlike previous methods, our proposal does not require any camera pose information at test-time. We focus on pose-invariant object instance retrieval, where standard Transformer networks struggle, due to the large differences in viewpoint between query and retrieved images. Experimentally, our method outperforms state-of-the-art approaches at object retrieval, without needing pose information at test-time.
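The epipolar guidance above amounts to penalizing cross-attention mass that falls far from the epipolar lines. A minimal numpy sketch of such a penalty; the soft margin, the hard on/off-line mask, and the averaging are assumptions, not the paper's exact loss:

```python
import numpy as np

def epipolar_attention_penalty(attn, q_pts, k_pts, F, margin=2.0):
    """Penalty on cross-attention mass falling far from the epipolar lines
    induced by the fundamental matrix F.  attn is (Q, K); q_pts and k_pts
    are pixel coordinates of shape (Q, 2) and (K, 2)."""
    q_h = np.hstack([q_pts, np.ones((len(q_pts), 1))])   # homogeneous coords
    k_h = np.hstack([k_pts, np.ones((len(k_pts), 1))])
    lines = q_h @ F.T                                    # epipolar line l' = F x per query
    # point-to-line distances |l' . x'| / sqrt(a^2 + b^2), shape (Q, K)
    num = np.abs(lines @ k_h.T)
    den = np.linalg.norm(lines[:, :2], axis=1, keepdims=True) + 1e-8
    off_line = (num / den > margin).astype(float)        # keys off the epipolar line
    return float((attn * off_line).sum() / len(attn))    # mean off-line attention mass
```

During training such a penalty (computed from known training-time poses) pushes attention onto geometrically plausible matches, while at test time the attention layer runs unchanged, with no pose input.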
The deployment of humanoid robots in demanding situations such as search and rescue requires highly intelligent decision-making and proficient sensorimotor skills. A promising solution is to leverage human capabilities by interconnecting the robot and a human through teleoperation. To create seamless operation, this paper presents a dynamic telelocomotion framework that synchronizes the gait of a human pilot with the walking of a bipedal robot. First, we introduce a method to generate a virtual human walking model from the human pilot's stepping behavior, which serves as a reference for the robot's walking. Second, the dynamics of the walking reference and of the robot's walking are synchronized by applying forces to both the human pilot and the robot, achieving dynamic similarity between the two systems. This enables the human pilot to continuously perceive and cancel any asynchrony between the walking reference and the robot. A consistent step-placement strategy for the robot is derived to maintain dynamic similarity through step transitions. Using our human-machine interface, we demonstrate that human pilots can achieve stable and synchronous telelocomotion of a simulated robot through standing, walking, and disturbance-rejection experiments. This work provides a fundamental step towards transferring human intelligence and reflexes to humanoid robots.
Teleoperation has emerged as an alternative to fully autonomous systems for achieving human-level capabilities on humanoid robots. Specifically, whole-body-control teleoperation is a promising hands-free strategy for commanding humanoids, but it demands considerable physical and mental effort. To alleviate this limitation, researchers have proposed shared-control methods that incorporate robot decision-making to assist the human on low-level tasks, further reducing the operational effort. Nevertheless, shared-control methods for whole-body humanoid teleoperation have not yet been explored. In this work, we study how whole-body feedback affects the performance of different shared-control methods in different environments. A time-derivative sigmoid function (TDSF) is proposed to generate more intuitive force feedback from obstacles. Comprehensive human experiments were conducted, and the results show that force feedback enhances whole-body teleoperation performance in unfamiliar environments but can degrade performance in familiar environments. Conveying the robot's intentions through haptics yielded further improvements, since the operators could use the force feedback for short-horizon planning and the visual feedback for long-horizon planning.
Convolutional neural networks have achieved successful results in image classification, attaining real-time performance that exceeds human level. However, texture images still pose some challenges for these models, for example, limited availability of training data, high inter-class similarity, and the absence of a global viewpoint of the object represented in the image, among other issues. In this context, this paper focuses on improving the accuracy of convolutional neural networks on texture classification. This is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating such features using Fisher vectors. The reason for using features from earlier convolutional layers is to obtain information that is less domain-specific. We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification. In both scenarios, Fisher vectors computed over multiple layers outperform state-of-the-art methods, confirming that early convolutional layers provide important information about the texture images for classification.
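The aggregation step can be sketched with a simplified Fisher-vector encoding. The sketch below keeps only the gradients with respect to the GMM means and omits the usual improved-FV normalisations; the GMM library, layer shapes, and per-layer codebooks are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Simplified Fisher vector: gradients of the diagonal-GMM log-likelihood
    with respect to the component means only."""
    q = gmm.predict_proba(descriptors)                  # (N, K) soft assignments
    diff = (descriptors[:, None, :] - gmm.means_) / np.sqrt(gmm.covariances_)
    fv = (q[:, :, None] * diff).sum(axis=0)
    fv /= len(descriptors) * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()                                   # shape (K * D,)

def multilayer_fv(feature_maps, gmms):
    """Encode each layer's spatial features as a Fisher vector and concatenate,
    mirroring the idea of aggregating early and late convolutional layers."""
    return np.concatenate([fisher_vector(f.reshape(-1, f.shape[-1]), g)
                           for f, g in zip(feature_maps, gmms)])
```

The concatenated vector would then feed a linear classifier, with the early-layer blocks carrying the less domain-specific information the abstract refers to.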
This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUIs) for bidirectional sketching interaction. Bidirectional sketching enables virtual sketches and physical objects to influence each other through physical actuation and digital computation. In existing AR sketching, the relationship between the virtual and physical worlds is one-directional: while physical interaction can affect virtual sketches, virtual sketches have no returning effect on the physical objects or the environment. In contrast, bidirectional sketching interaction allows seamless coupling between sketches and actuated TUIs. In this paper, we demonstrate the concept using tabletop-size small robots (Sony Toio) and an iPad-based AR sketching tool. In our system, virtual sketches drawn and simulated on the iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide with, and constrain physical Toio robots, as if the virtual sketches and physical objects existed in the same space, through seamless coupling between AR and the robots' motion. This paper contributes a set of novel interactions and a design space for bidirectional AR sketching. We demonstrate a series of potential applications, such as tangible physics education, explorable mechanisms, tangible gaming for children, and in-situ robot programming via sketching.